
Earth Data Science Coding Challenge!¶

Before we get started, make sure to read or review the guidelines below. These will help make sure that your code is readable and reproducible.

Don't get caught by these Jupyter notebook gotchas¶


Image source: https://alaskausfws.medium.com/whats-big-and-brown-and-loves-salmon-e1803579ee36

These are the most common issues that will keep you from getting started and delay your code review:

  1. When you try to run some code on GitHub Codespaces, you may be prompted to select a kernel.
    • The kernel refers to the version of Python you are using
    • You should use the base kernel, which should be the default option.
    • You can also use the Select Kernel menu in the upper right to select the base kernel
  2. Before you commit your work, make sure it runs reproducibly by clicking:
    1. Restart (this button won't appear until you've run some code), then
    2. Run All

Check your code to make sure it's clean and easy to read¶


  • Format all cells prior to submitting (right click on your code).
  • Use expressive names for variables so you or the reader knows what they are.
  • Use comments to explain your code -- e.g.
    # This is a comment, it starts with a hash sign
    

Label and describe your plots¶

Source: https://xkcd.com/833

Make sure each plot has:

  • A title that explains where and when the data are from
  • x- and y- axis labels with units where appropriate
  • A legend where appropriate

Icons: how to use this notebook¶

We use the following icons to let you know when you need to change something to complete the challenge:

  • 💻 means you need to write or edit some code.

  • 📖 indicates recommended reading

  • ✎ marks written responses to questions

  • 🌶 is an optional extra challenge


Habitat modeling under climate change¶

Our changing climate is shifting where key grassland species can live, and grassland management and restoration practices will need to take this into account.

Create a habitat suitability model for Sorghastrum nutans, a grass native to North America. In the past 50 years, its range has moved northward. The model will be based on combining multiple data layers related to soil, topography, and climate. You will also demonstrate the coding skills covered in this class by creating a modular, reproducible, object-oriented workflow for the model.

STEP 1: STUDY OVERVIEW¶

Your notebook should include an explanation of why and how you created your workflow. INCLUDE a sentence or two about the function/method and class you created, and why you chose it.

Classes and objects in Python¶

You can think of an object as a collection of named values and functions that make use of those values. When they are part of an object, we call them attributes and methods. Each object is created from a class, or object template. Examples of classes you have used include the pandas DataFrame and xarray DataArray. As an example, xr.DataArray includes attributes like .values, and methods like .mean(). It also contains some special attributes created when the function is initialized for each of the coordinates (e.g. x and y if your DataArray has those as coordinates).
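As a minimal illustration, here is a tiny custom class (the `Raster` name and the elevation values are invented for this example) showing the attribute/method distinction described above:

```python
# A minimal custom class: attributes store values, methods use them
class Raster:
    def __init__(self, name, values):
        self.name = name        # attribute: a named value
        self.values = values    # attribute: the data

    def mean(self):
        # method: a function that uses the object's own attributes
        return sum(self.values) / len(self.values)

dem = Raster('elevation', [1200.0, 1250.0, 1300.0])
print(dem.name)    # attribute access, like da.values
print(dem.mean())  # method call, like da.mean()
```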

You can absolutely write a reproducible scientific workflow without writing any of your own classes. However, there are some situations where custom classes can come in handy. Consider the following examples:

  • Suppose you are writing code to interface with an API. There are going to be multiple steps, such as logging in, searching for data, and downloading data. Throughout the whole interaction, you want to keep track of the dates and bounding box for the data -- and you might even want to access those values after the data are completely downloaded! If you write a single function, you'll run into three problems: 1) the function will be very long, 2) the function may take a long time to run, making it challenging to debug and test, and 3) you'll lose all the named values you define inside the function when it finishes running unless you pass them explicitly as return values. Using a class solves all these problems. You can split up the different parts of the workflow into shorter methods that all have access to the same information, such as bounding box or HTTP Session information.
  • Suppose you are assembling a data cube (aligned and harmonized data from heterogeneous sources). Eventually, you will have a single xarray DataArray or Dataset. However, in the meantime you need to keep track of many data files such that you can access them by name, by attribute, or by order. Often we use the pandas DataFrame for this type of thing. However, it would be nice if the DataFrame did a couple of things differently. For example, when it prints you would like it to display the bounding box of each DataArray instead of the array itself (which is computationally expensive). Or, maybe when you create the DataFrame you would like it to search for all file names that match a pattern and extract key information. You can accomplish these goals by creating a child class to pandas DataFrame that inherits all its capabilities and modifies its methods, perhaps adding some new ones.
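A minimal sketch of the second idea, a DataFrame child class with a modified print behavior (the `DataCubeIndex` name and the layer entries are invented for illustration):

```python
import pandas as pd

class DataCubeIndex(pd.DataFrame):
    """A DataFrame child class that prints a compact layer summary."""

    @property
    def _constructor(self):
        # Keep pandas operations returning this subclass
        return DataCubeIndex

    def __repr__(self):
        # Summarize the layers instead of printing full arrays
        return (f'DataCubeIndex with {len(self)} layers: '
                f'{list(self["name"])}')

idx = DataCubeIndex({'name': ['ph', 'elevation'],
                     'path': ['ph.tif', 'dem.tif']})
print(idx)
```

Because the class inherits from `pd.DataFrame`, all the usual indexing and merging still works; only the display changes.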

As you revisit your habitat modeling workflow, think carefully about at least one class you would like to modify or create.

Using climate model data¶

You will use MACAv2 data for historical and future climate. Be sure to compare at least two emissions scenarios, as well as historic or current climatology as a control. When downloading climate data, bear in mind that:

  • In general, we are interested in climate normals, which are typically calculated from 30 years of data so that they reflect the climate rather than a single year. So if you are interested in scenarios around 2050, download data covering at least 2035-2065.
  • There is a great deal of uncertainty in climate models. We deal with that by using an ensemble of models to try to capture that uncertainty. For each scenario, you should attempt to include models that are:
    • Warm and wet
    • Warm and dry
    • Cold and wet
    • Cold and dry
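To make the 30-year-normal idea concrete, here is a sketch using a synthetic monthly temperature series (the linear warming trend and the 2035-2064 window are invented for illustration):

```python
import numpy as np
import pandas as pd
import xarray as xr

# Synthetic monthly maximum temperature, 2006-2099 (values are made up)
time = pd.date_range('2006-01-01', '2099-12-31', freq='MS')
tasmax = xr.DataArray(
    20 + 0.02 * np.arange(len(time)),  # a simple linear warming trend
    coords={'time': time}, dims='time', name='tasmax')

# A 30-year climate normal for a 2050 scenario: average over 2035-2064
normal_2050 = tasmax.sel(time=slice('2035', '2064')).mean('time')
print(float(normal_2050))
```

The same `.sel(time=slice(...)).mean('time')` pattern applies directly to a MACAv2 DataArray once it is loaded.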

You will create a reproducible scientific workflow¶

Your workflow should:

  1. Define your study area: Download the USFS National Grassland Units and select your study sites.
  2. Fit a model: For each grassland:
    1. Download model variables as raster layers covering your study area envelope, including:
      • At least one soil variable from the POLARIS dataset
      • Elevation from the SRTM (available from the APPEEARS API)
      • The 30 year periods of precipitation and temperature from the MACAv2 dataset, accessible from Climate Toolbox. Include an arrangement of emissions scenarios and time periods, following the guidelines above.
    2. Calculate at least one derived topographic variable (slope or aspect) to use in your model. You will probably want to use the xarray-spatial library, which is available in the latest earth-analytics-python environment (but will need to be installed/updated if you are working on your own machine). Note that calculated slope may not be correct if you are using a CRS with units of degrees; you should re-project into a projected coordinate system with units of meters, such as the appropriate UTM Zone.
    3. Harmonize your data - make sure that the grids for each of your layers match up. Check out the ds.rio.reproject_match() method from rioxarray.
    4. Build your model. You can use any model you wish, so long as you explain your choice. However, if you are not sure what to do, we recommend building a fuzzy logic model (see below).
  3. Present your results in at least one figure for each grassland/climate scenario combination.
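If you go the fuzzy logic route, the core ingredient is a membership function that maps a raw variable to a 0-1 suitability score. Below is a minimal numpy sketch; the pH breakpoints are invented placeholders, not published tolerances for Sorghastrum nutans:

```python
import numpy as np

def trapezoid(x, low, opt_low, opt_high, high):
    """Trapezoidal fuzzy membership: 0 outside [low, high],
    1 between opt_low and opt_high, linear ramps in between."""
    rising = (x - low) / (opt_low - low)
    falling = (high - x) / (high - opt_high)
    return np.clip(np.minimum(rising, falling), 0, 1)

# Hypothetical pH suitability: fully suitable between 5.5 and 7.0
ph = np.array([4.0, 5.0, 5.5, 6.5, 8.5])
suitability = trapezoid(ph, 4.5, 5.5, 7.0, 8.0)
print(suitability)
```

Taking the minimum (or product) of the membership rasters for each variable yields a combined suitability surface; scikit-fuzzy offers similar building blocks (e.g. `fuzz.trapmf`) if you prefer the library version.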
In [ ]:
# Import packages used for the analysis
import os
from glob import glob
import math
import warnings

import cftime
import earthpy as et
import earthpy.appeears as eaapp
import geopandas as gpd
import geoviews as gv
import holoviews as hv
import hvplot.pandas
import hvplot.xarray
import pandas as pd
import requests
import matplotlib.pyplot as plt
import numpy as np
import rioxarray as rxr
import rioxarray.merge as rxrmerge
import skfuzzy as fuzz
import skfuzzy.control as ctrl
import xarray as xr
import xrspatial
from urllib.parse import urlparse


# I chose to reproject my grasslands to UTM zones 13 and 14.
# One of the grasslands is in the middle of zone 14, while
# the other is in zone 13.
utm_epsg_13 = 32613
utm_epsg_14 = 32614

# Create data directories 
data_dir = os.path.join(et.io.HOME, et.io.DATA_NAME)

# Ignore RuntimeWarning
warnings.filterwarnings("ignore", category=RuntimeWarning)
In [ ]:
class Grassland:
    def __init__(self, name, utm):
        self.name = name
        self.utm = utm
        self.grassgdf = None
    def load_grass_boundaries(self):
        grasslands_url = (
            "https://data.fs.usda.gov/geodata/edw/edw_resources/shp"
            "/S_USA.NationalGrassland.zip")

        grasslands_gdf = (
            gpd.read_file(grasslands_url)
            .set_index('GRASSLANDN')
        )
        self.grassgdf = (
            grasslands_gdf.loc[[self.name]]
            # .to_crs(self.utm)
        )
        self.polygon = self.grassgdf.to_crs(self.utm)

    def get_site_map(self):
        site = (gv.tile_sources.EsriNatGeo * 
               gv.Polygons(self.grassgdf)
               .opts(fill_alpha=0,
                     line_color='green',
                    line_width=5)).opts(
                                    width=800,
                                    height=600,
                                    title=self.name
                                )
        return site
    def load_site_ph(self):
        polaris_das = []
        bounds = self.grassgdf.total_bounds
        min_lon, min_lat, max_lon, max_lat = bounds

        for lon in range(math.floor(min_lon), math.ceil(max_lon)):
            for lat in range(math.floor(min_lat), math.ceil(max_lat)):
                # Each POLARIS tile covers a 1 x 1 degree cell; use new
                # names so the site bounds above are not overwritten
                tile_max_lat, tile_max_lon = lat + 1, lon + 1
                polaris_url_format = (
                    "http://hydrology.cee.duke.edu/POLARIS/PROPERTIES/v1.0"
                    "/ph/mean/60_100/lat{min_lat}{max_lat}_lon{min_lon}{max_lon}.tif"
                )

                polaris_url = polaris_url_format.format(
                    min_lat=lat, min_lon=lon,
                    max_lat=tile_max_lat, max_lon=tile_max_lon)
                polaris_das.append(
                    rxr.open_rasterio(polaris_url, masked=True).squeeze())
        self.ph_da = (
            rxrmerge.merge_arrays(polaris_das)
            .rio.reproject(self.utm)
            .rio.write_crs(self.utm)
            .rio.clip_box(*self.polygon.total_bounds)
        )        

    def download_srtm_data(self):

        """
        Download SRTM raster for a given geometry, start date, and end date
        
        Downloads data from NASA Shuttle Radar Topography Mission (SRTM)
        given a spatial and temporal extent. NASA Shuttle Radar 
        Topography Mission Global 1 arc second launched 
        February 11, 2000 and flew for 11 days.
        <https://lpdaac.usgs.gov/products/srtmgl1v003/>

        Parameters
        ==========
        name : str
        The name used to label the download
        grasspolygon : geopandas.GeoDataFrame
        The geometry to derive the download extent from. 
        Must have a `.envelope` attribute.

        Returns
        =======
        downloader : earthpy.earthexplorer.EarthExplorerDownloader
        Object with information about the download, including the data directory.
        """
        
        print(f'\nGRASSLANDN: {self.name}')
        srtm_downloader = eaapp.AppeearsDownloader(
            download_key = self.name.lower().replace(' ', '-'),
            product='SRTMGL1_NC.003',
            layer='SRTMGL1_DEM',
            start_date='02-11-2000',
            end_date='02-21-2000',
            polygon=self.grassgdf,
            ea_dir=os.path.join(et.io.HOME, 'earth-analytics')
        )
        # Only request and download if the data are not already present
        if not os.path.exists(srtm_downloader.data_dir):
            srtm_downloader.submit_task_request()
            print("SRTM requested and downloading...")
            srtm_downloader.download_files()
        else:
            print("SRTM data already downloaded.")

        self.srtm_path = glob(
            os.path.join(
                srtm_downloader.data_dir,
                'SRTMGL1_NC.003*',
                '*.tif'))

    def load_srtm(self):
        """
        Load in and merge downloaded srtm

        Parameters
        ==========
        name : str
        The name used to label the download
        srtm_paths : dict 
            path to srtm files

        Returns
        =======
        srtm_gdf: rxr.DataArray
        DataArray with the srtm data
        """
        
        print(f'\nGRASSLANDN: {self.name}')
        self.grass_srtm_da = (
            rxr.open_rasterio(self.srtm_path[0], masked = True)
            .squeeze()
        ).rio.reproject_match(self.ph_da)
        print(f"Loaded SRTM data for {self.name}")
        return self.grass_srtm_da
    
    def calc_aspect(self):
        """
        Calculate aspect from srtm data

        Parameters
        ==========
        srtm_da : xr.DataArray
            DataArray with the srtm data

        Returns
        =======
        aspect_da: xr.DataArray
            DataArray with the aspect data
        """
        self.aspect_da = xrspatial.aspect(self.grass_srtm_da)
        print(f"Calculated aspect for {self.name}")
        return self.aspect_da
         


oglala = Grassland("Oglala National Grassland", utm_epsg_13)
fort_pierre = Grassland("Fort Pierre National Grassland", utm_epsg_14)
In [ ]:
oglala.load_grass_boundaries()
fort_pierre.load_grass_boundaries()
fort_pierre.grassgdf
fort_pierre.get_site_map()
oglala.get_site_map()
oglala.grassgdf.total_bounds
oglala.load_site_ph()
In [ ]:
oglala.download_srtm_data()
oglala.load_srtm()
oglala.calc_aspect()
fort_pierre.load_site_ph()
fort_pierre.download_srtm_data()
fort_pierre.load_srtm()
fort_pierre.calc_aspect()
GRASSLANDN: Oglala National Grassland
SRTM data already downloaded.

GRASSLANDN: Oglala National Grassland
Loaded SRTM data for Oglala National Grassland
Calculated aspect for Oglala National Grassland

GRASSLANDN: Fort Pierre National Grassland
SRTM data already downloaded.

GRASSLANDN: Fort Pierre National Grassland
Loaded SRTM data for Fort Pierre National Grassland
Calculated aspect for Fort Pierre National Grassland
Out[ ]:
<xarray.DataArray 'aspect' (y: 1160, x: 1121)>
array([[nan, nan, nan, ..., nan, nan, nan],
       [nan, nan, nan, ..., nan, nan, nan],
       [nan, nan, nan, ..., nan, nan, nan],
       ...,
       [nan, nan, nan, ..., nan, nan, nan],
       [nan, nan, nan, ..., nan, nan, nan],
       [nan, nan, nan, ..., nan, nan, nan]], dtype=float32)
Coordinates:
    band         int32 1
    spatial_ref  int32 0
  * x            (x) float64 3.819e+05 3.819e+05 ... 4.148e+05 4.148e+05
  * y            (y) float64 4.904e+06 4.904e+06 4.904e+06 ... 4.87e+06 4.87e+06
Attributes:
    add_offset:     0.0
    AREA_OR_POINT:  Area
    scale_factor:   1.0
    units:          Meters
In [ ]:
fort_pierre.aspect_da.hvplot(x='x', y='y', 
                        colormap="colorwheel",
                        title="Aspect at Fort Pierre National Grassland")
Out[ ]:
In [ ]:
oglala.ph_da.hvplot() * oglala.polygon.hvplot(color=None, line_color='white')
Out[ ]:
In [ ]:
fort_pierre.grassgdf.total_bounds
Out[ ]:
array([-100.47626475,   43.97728106, -100.06625771,   44.28162497])
In [ ]:
oglala.grassgdf.total_bounds
Out[ ]:
array([-104.05313982,   42.74092908, -103.36517901,   43.00177636])

STEP 2: SITE SELECTION AND MAPS¶

In [ ]:
oglala.get_site_map()
Out[ ]:
In [ ]:
fort_pierre.get_site_map()
Out[ ]:

STEP 3: DOWNLOAD DATA AND SELECT VARIABLES AND SCENARIOS¶

To select your variables, I recommend first downloading the MACAv2 data for many models but for a single point at the center of your site. This will allow you to determine which climatology category each model fits into (e.g. warm/cold and wet/dry). You can either experiment with reproducible code using OpenDAP, or use one of the subsetting web tools and include the .csv files in your repository. It should be a reasonable amount of data to include on GitHub.
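One reproducible way to get a single-point time series is to open a gridded dataset and select the cell nearest the site centroid with xarray. The sketch below uses a small synthetic dataset in place of a real MACAv2 file (the coordinates, centroid, and values are invented for illustration):

```python
import numpy as np
import pandas as pd
import xarray as xr

# Stand-in for one MACAv2 model file (synthetic values)
ds = xr.Dataset(
    {'precipitation': (('time', 'lat', 'lon'),
                       np.random.default_rng(42).random((12, 4, 4)))},
    coords={'time': pd.date_range('2050-01-01', periods=12, freq='MS'),
            'lat': np.linspace(42.5, 43.5, 4),
            'lon': np.linspace(-104.5, -103.5, 4)})

# Select the grid cell nearest the study-site centroid
site_lat, site_lon = 42.87, -103.71
point = ds.precipitation.sel(lat=site_lat, lon=site_lon, method='nearest')
```

The same `.sel(..., method='nearest')` call works on a dataset opened over OPeNDAP, so only the point of interest is transferred.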

This step should include an EXPLANATION of the variables and scenarios you chose, backed up by research and evidence.

In [ ]:
# Open csv datafile that the FutureScatterTool outputs
# Find the line where the data starts
# Data Source: Future Scatter Tool in the Climate Toolbox 
# (https://climatetoolbox.org/tool/future-scatter)


# List of downloaded CSV file names
csv_files = ['data_FutureScatterTool_rcp45.csv', 'data_FutureScatterTool_rcp85.csv']

# Initialize an empty list to store the dataframes
scatter_dfs = []

for file in csv_files:
    with open(file, 'r') as f:
        lines = f.readlines()

    # Find the line number where the pattern is
    start_line = next(
        i for i, line in enumerate(lines) 
        if "********************************************************" in line)

    # Read the CSV file, skipping the lines before the data
    df = pd.read_csv(file, skiprows=start_line + 1)

    # Append the dataframe to the list
    scatter_dfs.append(df)
In [ ]:
import panel as pn

# Create a list to store the plots
plots = []

# Create a list of titles
titles = ['RCP 4.5 Ensemble Scatter for 2040-2069', 'RCP 8.5 Ensemble Scatter for 2040-2069']

# Loop over the DataFrames in scatter_dfs and the titles
for df, title in zip(scatter_dfs, titles):
    # Strip whitespace from column names
    df.columns = df.columns.str.strip()

    # Create a scatter plot of the 'X SCEN' and 'Y SCEN' values
    scatter_plot = df.hvplot.scatter(
        x='X SCEN', y='Y SCEN', 
        title=title, 
        xlabel='Annual Average Max Temp (C)', 
        ylabel='Annual Average Precip (mm)', 
        hover_cols='Model')

    # Calculate the average of 'X SCEN' and 'Y SCEN'
    x_avg = df['X SCEN'].mean()
    y_avg = df['Y SCEN'].mean()

    # Create line plots for the averages
    x_avg_line = hv.VLine(x_avg).opts(color='red', line_width=1)
    y_avg_line = hv.HLine(y_avg).opts(color='red', line_width=1)

    # Overlay the average lines on the scatter plot
    scatter_plot = scatter_plot * x_avg_line * y_avg_line

    # Add the plot to the list
    plots.append(scatter_plot)

# Combine the plots into a Panel layout
layout = pn.Column(*plots)

# Display the layout
layout
Out[ ]:
In [ ]:
# Save the layout to an HTML file
layout.save('ensemble_scatter.html')
In [ ]:
# Create a list to store the plots
model_plots = []

# Models chosen to span the ensemble: DRY-COLD, WET-COLD, DRY-WARM,
# and WET-WARM (from Russia, the USA, and Japan)
models = ['inmcm4', 'GFDL-ESM2M', 'MIROC5', 'MIROC-ESM']  # Replace with your model names

# Loop over the DataFrames in scatter_dfs and the titles
for df, title in zip(scatter_dfs, titles):
    # Strip whitespace from column names
    df.columns = df.columns.str.strip()

    # Create a scatter plot for the models to highlight
    highlight_plot = df[df['Model'].isin(models)].hvplot.scatter(
        x='X SCEN', y='Y SCEN', 
        title=title, 
        xlabel='Annual Average Max Temp (C)', 
        ylabel='Annual Average Precip (mm)', 
        hover_cols='Model',
        color='red')  # Color the points red

    # Create a scatter plot for the rest of the models
    rest_plot = df[~df['Model'].isin(models)].hvplot.scatter(
        x='X SCEN', y='Y SCEN', 
        hover_cols='Model',
        color='blue')  # Color the points blue

    # Overlay the two plots
    model_plot = rest_plot * highlight_plot

    # Calculate the average of 'X SCEN' and 'Y SCEN'
    x_avg = df['X SCEN'].mean()
    y_avg = df['Y SCEN'].mean()

    # Create line plots for the averages
    x_avg_line = hv.VLine(x_avg).opts(color='black', line_width=1)
    y_avg_line = hv.HLine(y_avg).opts(color='black', line_width=1)

    # Overlay the average lines on the scatter plot
    model_plot = model_plot * x_avg_line * y_avg_line

    # Add the plot to the list
    model_plots.append(model_plot)

# Combine the plots into a Panel layout
layout_model = pn.Column(*model_plots)

# Display the layout
layout_model
Out[ ]:
In [ ]:
# Save the layout to an HTML file
layout_model.save('chosen_ensemble_scatter.html')

STEP 4: PROCESS AND HARMONIZE DATA¶

In [ ]:
# Choose the 4 selected models and download the raster files for the same
# rectangle that covers both grasslands
# Clip the raster files to the grasslands
# Make sure that the CRS and all other aspects match

# Harmonize with the other data layers
In [ ]:
# Open the macav2metdata_urls.txt file with the chosen models with monthly data
# for the same rectangle that covers both grasslands
# precipitation and tasmax for rcp4.5/8.5 scenarios for 2040-2069
# from https://climate.northwestknowledge.net/MACA/data_portal.php

with open('macav2metdata_urls.txt', 'r') as file:
    # Read the lines of the file into a list
    dataset_urls = file.readlines()

# Now the variable 'dataset_urls' is a list where each element is a line from the file
type(dataset_urls)
Out[ ]:
list
In [ ]:
maca_path = os.path.join(data_dir, 'maca')

os.makedirs(maca_path, exist_ok=True)

# Remove the newline character from each URL and drop any blank lines
dataset_urls = [url.strip() for url in dataset_urls if url.strip()]

# Loop over the URLs in the list
for url in dataset_urls:
    # Parse the URL and rebuild it without the query parameters
    parsed_url = urlparse(url)
    clean_url = parsed_url.scheme + "://" + parsed_url.netloc + parsed_url.path

    # Get the file name from the cleaned URL
    file_name = clean_url.split('/')[-1].strip()

    # Define the path to save the file
    save_path = os.path.join(maca_path, file_name)

    # Check if the file already exists
    if os.path.exists(save_path):
        print(f'File {file_name} already exists.')
        continue

    # Send a GET request to the URL
    response = requests.get(url, stream=True)

    # Check if the request was successful
    if response.status_code == 200:
        # Open the save file in write mode
        with open(save_path, 'wb') as file:
            file.write(response.content)
    else:
        print(f'Failed to download file from {url}')
File agg_macav2metdata_pr_GFDL-ESM2M_r1i1p1_rcp45_2006_2099_CONUS_monthly.nc already exists.
File agg_macav2metdata_pr_GFDL-ESM2M_r1i1p1_rcp85_2006_2099_CONUS_monthly.nc already exists.
File agg_macav2metdata_tasmax_GFDL-ESM2M_r1i1p1_rcp45_2006_2099_CONUS_monthly.nc already exists.
File agg_macav2metdata_tasmax_GFDL-ESM2M_r1i1p1_rcp85_2006_2099_CONUS_monthly.nc already exists.
File agg_macav2metdata_pr_inmcm4_r1i1p1_rcp45_2006_2099_CONUS_monthly.nc already exists.
File agg_macav2metdata_pr_inmcm4_r1i1p1_rcp85_2006_2099_CONUS_monthly.nc already exists.
File agg_macav2metdata_tasmax_inmcm4_r1i1p1_rcp45_2006_2099_CONUS_monthly.nc already exists.
File agg_macav2metdata_tasmax_inmcm4_r1i1p1_rcp85_2006_2099_CONUS_monthly.nc already exists.
File agg_macav2metdata_pr_MIROC5_r1i1p1_rcp45_2006_2099_CONUS_monthly.nc already exists.
File agg_macav2metdata_pr_MIROC5_r1i1p1_rcp85_2006_2099_CONUS_monthly.nc already exists.
File agg_macav2metdata_tasmax_MIROC5_r1i1p1_rcp45_2006_2099_CONUS_monthly.nc already exists.
File agg_macav2metdata_tasmax_MIROC5_r1i1p1_rcp85_2006_2099_CONUS_monthly.nc already exists.
File agg_macav2metdata_pr_MIROC-ESM_r1i1p1_rcp45_2006_2099_CONUS_monthly.nc already exists.
File agg_macav2metdata_pr_MIROC-ESM_r1i1p1_rcp85_2006_2099_CONUS_monthly.nc already exists.
File agg_macav2metdata_tasmax_MIROC-ESM_r1i1p1_rcp45_2006_2099_CONUS_monthly.nc already exists.
File agg_macav2metdata_tasmax_MIROC-ESM_r1i1p1_rcp85_2006_2099_CONUS_monthly.nc already exists.
In [ ]:
# List the downloaded MACA NetCDF files
for f in os.listdir(maca_path):
    if f.endswith('.nc'):
        print(f)
agg_macav2metdata_pr_GFDL-ESM2M_r1i1p1_rcp45_2006_2099_CONUS_monthly.nc
agg_macav2metdata_pr_GFDL-ESM2M_r1i1p1_rcp85_2006_2099_CONUS_monthly.nc
agg_macav2metdata_pr_inmcm4_r1i1p1_rcp45_2006_2099_CONUS_monthly.nc
agg_macav2metdata_pr_inmcm4_r1i1p1_rcp85_2006_2099_CONUS_monthly.nc
agg_macav2metdata_pr_MIROC-ESM_r1i1p1_rcp45_2006_2099_CONUS_monthly.nc
agg_macav2metdata_pr_MIROC-ESM_r1i1p1_rcp85_2006_2099_CONUS_monthly.nc
agg_macav2metdata_pr_MIROC5_r1i1p1_rcp45_2006_2099_CONUS_monthly.nc
agg_macav2metdata_pr_MIROC5_r1i1p1_rcp85_2006_2099_CONUS_monthly.nc
agg_macav2metdata_tasmax_GFDL-ESM2M_r1i1p1_rcp45_2006_2099_CONUS_monthly.nc
agg_macav2metdata_tasmax_GFDL-ESM2M_r1i1p1_rcp85_2006_2099_CONUS_monthly.nc
agg_macav2metdata_tasmax_inmcm4_r1i1p1_rcp45_2006_2099_CONUS_monthly.nc
agg_macav2metdata_tasmax_inmcm4_r1i1p1_rcp85_2006_2099_CONUS_monthly.nc
agg_macav2metdata_tasmax_MIROC-ESM_r1i1p1_rcp45_2006_2099_CONUS_monthly.nc
agg_macav2metdata_tasmax_MIROC-ESM_r1i1p1_rcp85_2006_2099_CONUS_monthly.nc
agg_macav2metdata_tasmax_MIROC5_r1i1p1_rcp45_2006_2099_CONUS_monthly.nc
agg_macav2metdata_tasmax_MIROC5_r1i1p1_rcp85_2006_2099_CONUS_monthly.nc
In [ ]:
class Macav2Projection:
    def __init__(self, filepath, var=None, model=None, scenario=None):
        self.filepath = filepath
        self.var = var if var is not None else self.get_variables_from_filename()
        self.model = model if model is not None else self.get_model_from_filename()
        self.scenario = scenario

    def get_variables_from_filename(self):
        var = []
        for f in os.listdir(self.filepath):
            if f.endswith('.nc'):
                # Split the filename on underscores; the variable name
                # (e.g. 'pr' or 'tasmax') is the third piece
                variable_name = f.split('_')[2]
                var.append(variable_name)
        return var
    
    def get_model_from_filename(self):
        model = []
        for f in os.listdir(self.filepath):
            if f.endswith('.nc'):
                # Split the filename on underscores; the model name
                # is the fourth piece
                model_name = f.split('_')[3]
                model.append(model_name)
        return model
    
    def load_data(self, var):
        da = []
        for f in os.listdir(self.filepath):
            if f.endswith('.nc') and \
                (self.var in f) and \
                    (self.model in f) and \
                        (self.scenario in f):
                load_path = os.path.join(self.filepath, f)
                # Open the dataset
                ds = xr.open_dataset(load_path)
                # Reassign the longitude coordinates
                ds = ds.assign_coords(lon=ds.lon-360)
                da.append(ds)
        da = da[0]
        if var == 'pr':
            data = da.precipitation
        elif var == 'tasmax':
            data = da.air_temperature
        else:
            raise ValueError("Invalid variable. Expected 'pr' or 'tasmax'.")
        self.transformed = data.rio.write_crs("epsg:4326", inplace=True)
        self.transformed = data.rio.set_spatial_dims('lon', 'lat', inplace=True)
        return self.transformed

    def average_annual_projected(self):
        d = self.transformed
        # Convert cftime index to datetime index
        d['time'] = d.indexes['time'].to_datetimeindex()
        annual_sum = d.groupby(d.time.dt.year).sum('time')
        annual_sum = annual_sum.rename({'lon': 'x', 'lat': 'y'})
In [ ]:
def clip_precipitation(data_array, clip_box):
    """
    Convert the cftime index to datetime index, group by year, 
    sum values for each year, and clip the data.

    Parameters:
    data_array (xr.DataArray): DataArray with a cftime index.
    clip_box (tuple): Tuple containing the bounds for clipping (minx, miny, maxx, maxy). 
    Grasslands class self.grassgdf.total_bounds can be used to get the bounds.

    Returns:
    xr.DataArray: DataArray grouped by year with summed values and clipped to the specified bounds.
    """
    # Convert cftime index to datetime index
    data_array['time'] = data_array.indexes['time'].to_datetimeindex()
    
    # Group by year and sum values
    annual_data_array = data_array.groupby('time.year').sum('time')
    annual_data_array = annual_data_array.rename({'lon': 'x', 'lat': 'y'})
    
    # Clip the data array using the clip box
    clipped_data_array = annual_data_array.rio.clip_box(*clip_box)
    
    return clipped_data_array
In [ ]:
def clip_tasmax(data_array, clip_box):
    """
    Convert the cftime index to datetime index, group by year, 
    get max values for each year, and clip the data.

    Parameters:
    data_array (xr.DataArray): DataArray with a cftime index.
    clip_box (tuple): Tuple containing the bounds for clipping (minx, miny, maxx, maxy). 
    Grasslands class self.grassgdf.total_bounds can be used to get the bounds.

    Returns:
    xr.DataArray: DataArray of annual maximum temperatures (°C), clipped to the specified bounds.
    """
    # Convert cftime index to datetime index
    data_array['time'] = data_array.indexes['time'].to_datetimeindex()
    
    # Group by year and max values
    annual_data_array = data_array.groupby('time.year').max('time')

    # Convert from K to Celsius
    annual_data_array = annual_data_array - 273.15

    annual_data_array = annual_data_array.rename({'lon': 'x', 'lat': 'y'})

    # Set the CRS
    annual_data_array.rio.write_crs("EPSG:4326", inplace=True)
    
    # Clip the data array using the clip box
    clipped_data_array = annual_data_array.rio.clip_box(*clip_box)
    
    return clipped_data_array
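`clip_precipitation` and `clip_tasmax` share almost all of their logic and differ only in the annual reducer (sum vs. max). A sketch of how that shared step could be factored out; `annual_aggregate` is a hypothetical helper, not something used elsewhere in this notebook:

```python
import xarray as xr


def annual_aggregate(data_array: xr.DataArray, reducer: str = "sum") -> xr.DataArray:
    """Aggregate monthly values to annual ones with the given reducer
    ("sum" for precipitation totals, "max" for annual maximum temperature)."""
    grouped = data_array.groupby("time.year")
    # Dispatch to the matching xarray reduction, e.g. grouped.sum("time")
    return getattr(grouped, reducer)("time")
```

The unit conversion and clipping steps could then be layered on top, keeping a single code path for both variables.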
In [ ]:
maca = Macav2Projection(maca_path)

warmwet = Macav2Projection(maca_path, 'pr', 'MIROC-ESM', 'rcp45')
warmwet_pr_rcp45 = warmwet.load_data('pr')
In [ ]:
clip_box_oglala = oglala.grassgdf.total_bounds
annual_warmwet_pr_clipped = clip_precipitation(warmwet_pr_rcp45, clip_box_oglala)
In [ ]:
MIROC_ESM_45_pr_oglala_plot = (annual_warmwet_pr_clipped.mean('year').hvplot(
    title="Projected mean annual precipitation (mm)\n"
    "at Oglala National Grassland in 2050 (MIROC-ESM, RCP 4.5)",
    xlabel="Longitude",
    ylabel="Latitude"
)).opts(width=500, height=300)
MIROC_ESM_45_pr_oglala_plot
Out[ ]:
In [ ]:
clip_box_oglala = oglala.grassgdf.total_bounds
clip_box_fp = fort_pierre.grassgdf.total_bounds

# Calculate precipitation for Oglala National Grassland
warmwet = Macav2Projection(maca_path, 'pr', 'MIROC-ESM', 'rcp45')
warmwet_pr_rcp45 = warmwet.load_data('pr')
annual_warmwet_pr_clipped = clip_precipitation(warmwet_pr_rcp45, clip_box_oglala)

# Calculate temperature for Oglala National Grassland
warmwet = Macav2Projection(maca_path, 'tasmax', 'MIROC-ESM', 'rcp45')
warmwet_tasmax_rcp45 = warmwet.load_data('tasmax')
annual_warmwet_tasmax_clipped = (
    clip_tasmax(warmwet_tasmax_rcp45, clip_box_oglala))
In [ ]:
color_limits = (34, 37)

MIROC_ESM_45_tasmax_oglala_plot = (
    annual_warmwet_tasmax_clipped.mean('year').hvplot(
    title="Projected annual max air temperature (°C)\n"
    "at Oglala National Grassland in 2050 (MIROC-ESM, RCP 4.5)",
    xlabel="Longitude",
    ylabel="Latitude",
    cmap='Reds',
    clim=color_limits
)).opts(width=500, height=300)

MIROC_ESM_45_tasmax_oglala_plot
Out[ ]:
In [ ]:
clip_box_oglala = oglala.grassgdf.total_bounds
clip_box_fp = fort_pierre.grassgdf.total_bounds

# Calculate precipitation for Oglala National Grassland
warmdry = Macav2Projection(maca_path, 'pr', 'MIROC5', 'rcp45')
warmdry_pr_rcp45 = warmdry.load_data('pr')
annual_warmdry_pr_clipped = clip_precipitation(warmdry_pr_rcp45, clip_box_oglala)

# Calculate temperature for Oglala National Grassland
warmdry = Macav2Projection(maca_path, 'tasmax', 'MIROC5', 'rcp45')
warmdry_tasmax_rcp45 = warmdry.load_data('tasmax')
annual_warmdry_tasmax_clipped = (
    clip_tasmax(warmdry_tasmax_rcp45, clip_box_oglala))
In [ ]:
clip_box_oglala = oglala.grassgdf.total_bounds
clip_box_fp = fort_pierre.grassgdf.total_bounds

# Calculate precipitation for Oglala National Grassland
colddry = Macav2Projection(maca_path, 'pr', 'inmcm4', 'rcp45')
colddry_pr_rcp45 = colddry.load_data('pr')
annual_colddry_pr_clipped = clip_precipitation(colddry_pr_rcp45, clip_box_oglala)

# Calculate temperature for Oglala National Grassland
colddry = Macav2Projection(maca_path, 'tasmax', 'inmcm4', 'rcp45')
colddry_tasmax_rcp45 = colddry.load_data('tasmax')
annual_colddry_tasmax_clipped = (
    clip_tasmax(colddry_tasmax_rcp45, clip_box_oglala))
In [ ]:
MIROC_45_pr_oglala_plot = (annual_warmdry_pr_clipped.mean('year').hvplot(
    title="Projected mean annual precipitation (mm)\n"
    "at Oglala National Grassland in 2050 (MIROC5, RCP 4.5)",
    xlabel="Longitude",
    ylabel="Latitude"
)).opts(width=500, height=300)
MIROC_45_pr_oglala_plot
Out[ ]:
In [ ]:
MIROC_45_tasmax_oglala_plot = (
    annual_warmdry_tasmax_clipped.mean('year').hvplot(
    title="Projected annual max air temperature (°C)\n"
    "at Oglala National Grassland in 2050 (MIROC5, RCP 4.5)",
    xlabel="Longitude",
    ylabel="Latitude",
    cmap='Reds',
    clim=color_limits
)).opts(width=500, height=300)

MIROC_45_tasmax_oglala_plot
Out[ ]:
In [ ]:
# Create a panel layout
layout_macav2 = pn.Column(
    pn.Row(MIROC_ESM_45_pr_oglala_plot,MIROC_45_pr_oglala_plot),
    pn.Row(MIROC_ESM_45_tasmax_oglala_plot, MIROC_45_tasmax_oglala_plot)
)

layout_macav2
# # Display the panel
# layout.servable()
Out[ ]:
In [ ]:
layout_macav2.save('macav2models_panel.html')

STEP 5: DEVELOP A FUZZY LOGIC MODEL¶


A fuzzy logic model is one that is built on expert knowledge rather than training data. You may wish to use the scikit-fuzzy library, which includes many utilities for building this sort of model. In particular, it contains a number of membership functions which can convert your data into values from 0 to 1 using, for example, the maximum, minimum, and optimal values for soil pH.

To train a fuzzy logic habitat suitability model:

  1. Research S. nutans, and find out what optimal values are for each variable you are using (e.g. soil pH, slope, and current climatological annual precipitation).
  2. For each digital number in each raster, assign a continuous value from 0 to 1 for how close that grid square is to the optimum range (1=optimal, 0=incompatible). DO NOT DO THIS WITH A LOOP! A vectorized function that operates on the whole array at once will be much easier and faster.
  3. Combine your layers by multiplying them together. This will give you a single suitability number for each square.
  4. Optionally, you may apply a threshold to make the most suitable areas pop on your map.
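Step 2 can be done without a loop. A minimal numpy sketch of a trapezoidal membership function; the specific breakpoints (and the helper name `trapezoidal_suitability`) are illustrative assumptions, with the 635-1270 mm optimum taken from the precipitation discussion below:

```python
import numpy as np


def trapezoidal_suitability(values, lo, opt_lo, opt_hi, hi):
    """Score each grid square from 0 (incompatible) to 1 (optimal):
    1 inside [opt_lo, opt_hi], with linear ramps down to 0 at lo and hi."""
    rising = (values - lo) / (opt_lo - lo)
    falling = (hi - values) / (hi - opt_hi)
    return np.clip(np.minimum(rising, falling), 0.0, 1.0)


# Operates on the whole array at once, no loop over grid squares
precip_mm = np.array([300.0, 635.0, 950.0, 1270.0, 1600.0])
scores = trapezoidal_suitability(precip_mm, 400.0, 635.0, 1270.0, 1500.0)
```

The same function applies unchanged to a 2-D raster array, which is what makes the vectorized approach both simpler and faster than iterating over cells.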

Sorghastrum nutans Suitability¶

Aspect¶

Aspect refers to the direction that a slope faces and influences the amount of sunlight and, consequently, the temperature and moisture levels. Here are the general effects of different aspects in the Northern Hemisphere:

  • South-facing slopes: Receive more sunlight, tend to be warmer and drier. Suitable for drought-tolerant species.
  • North-facing slopes: Receive less sunlight, tend to be cooler and moister. Suitable for species that prefer cooler, moister conditions.
  • East-facing slopes: Receive morning sunlight, which tends to be less intense than afternoon sunlight, leading to moderate conditions.
  • West-facing slopes: Receive afternoon sunlight, which tends to be more intense, leading to warmer conditions in the afternoon.

For Sorghastrum nutans:

  • South and West-facing slopes might be less suitable if the area is already prone to drought or high temperatures.
  • North and East-facing slopes are generally more suitable as they provide moderate moisture and temperature conditions, which are favorable for Indiangrass growth.
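In the Northern Hemisphere, this preference can be encoded numerically as a "northness" score derived from the aspect angle. A numpy sketch; the final model in this notebook does not use aspect, so this is purely illustrative:

```python
import numpy as np

# Aspect in degrees clockwise from north (0 = N, 90 = E, 180 = S, 270 = W)
aspect_deg = np.array([0.0, 90.0, 180.0, 270.0])

# cos(aspect) is +1 for north-facing and -1 for south-facing slopes;
# rescale to 0-1 so the score can be multiplied into a suitability model
northness = (np.cos(np.radians(aspect_deg)) + 1.0) / 2.0
```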

Precipitation¶

Sorghastrum nutans (Indiangrass) is a native warm-season grass found in prairies and open woodlands. It is adapted to a range of precipitation conditions but thrives best under certain moisture regimes. Understanding the suitable precipitation levels for Indiangrass is crucial for habitat suitability modeling, conservation planning, and land management.

Optimal Precipitation Range¶

  • Annual Precipitation: Sorghastrum nutans typically performs best in regions receiving 25 to 50 inches (635 to 1270 mm) of annual precipitation. This range provides sufficient moisture for optimal growth without causing waterlogging or excessive soil erosion.

Seasonal Distribution¶

  • Growing Season Precipitation: Adequate rainfall during the growing season (late spring to early fall) is essential. Indiangrass benefits from consistent moisture availability during its active growth period. Summer rainfall is particularly important for its growth and development.
  • Dormant Season Precipitation: While Indiangrass is dormant during the winter months, precipitation during this period can help recharge soil moisture levels, preparing the plant for the upcoming growing season.

Drought Tolerance¶

  • Drought Conditions: Sorghastrum nutans is moderately drought-tolerant once established. It can survive periods of low precipitation, but prolonged droughts may reduce its vigor and productivity. Deep root systems help it access moisture from deeper soil layers during dry spells.
In [ ]:
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Define the universe of discourse for the input variables
precipitation = ctrl.Antecedent(np.arange(0, 1500, 1), 'precipitation')
ph = ctrl.Antecedent(np.arange(4, 9, 0.1), 'ph')
suitability = ctrl.Consequent(np.arange(0, 101, 1), 'suitability')

# Define the membership functions for precipitation (in mm per year)
precipitation['low'] = fuzz.trimf(precipitation.universe, [0, 0, 635])
precipitation['medium'] = fuzz.trimf(precipitation.universe, [635, 750, 1000])
precipitation['high'] = fuzz.trimf(precipitation.universe, [1000, 1150, 1500])

# Define the membership functions for pH
ph['acidic'] = fuzz.trimf(ph.universe, [4, 4, 6])
ph['neutral'] = fuzz.trimf(ph.universe, [5, 7, 8])
ph['alkaline'] = fuzz.trimf(ph.universe, [7, 9, 9])

# Define the membership functions for suitability
suitability['poor'] = fuzz.trimf(suitability.universe, [0, 0, 50])
suitability['fair'] = fuzz.trimf(suitability.universe, [25, 50, 75])
suitability['good'] = fuzz.trimf(suitability.universe, [50, 100, 100])

# Define the fuzzy rules
rule1 = ctrl.Rule(precipitation['low'] & ph['acidic'], suitability['poor'])
rule2 = ctrl.Rule(precipitation['low'] & ph['neutral'], suitability['fair'])
rule3 = ctrl.Rule(precipitation['low'] & ph['alkaline'], suitability['good'])
rule4 = ctrl.Rule(precipitation['medium'] & ph['acidic'], suitability['poor'])
rule5 = ctrl.Rule(precipitation['medium'] & ph['neutral'], suitability['good'])
rule6 = ctrl.Rule(precipitation['medium'] & ph['alkaline'], suitability['good'])
rule7 = ctrl.Rule(precipitation['high'] & ph['acidic'], suitability['fair'])
rule8 = ctrl.Rule(precipitation['high'] & ph['neutral'], suitability['fair'])
rule9 = ctrl.Rule(precipitation['high'] & ph['alkaline'], suitability['good'])

# Create a control system and simulation
suitability_ctrl = ctrl.ControlSystem([rule1, rule2, rule3, rule4, rule5, rule6, rule7, rule8, rule9])
suitability_simulation = ctrl.ControlSystemSimulation(suitability_ctrl)

# Example: Determine suitability for given precipitation and pH levels
precipitation_input = 450
ph_input = 7

# Set inputs and compute
suitability_simulation.input['precipitation'] = precipitation_input
suitability_simulation.input['ph'] = ph_input
suitability_simulation.compute()

# Get the result
suitability_output = suitability_simulation.output['suitability']
print(f'For a precipitation of {precipitation_input} mm and pH of {ph_input}, the suitability is {suitability_output:.2f}')
For a precipitation of 450 mm and pH of 7, the suitability is 50.00
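`ControlSystemSimulation` evaluates one (precipitation, pH) pair at a time, which is slow for a whole raster. The membership step itself can be vectorized with `np.interp` (similar to what `skfuzzy.interp_membership` does); a sketch using the same breakpoints as the `'medium'` precipitation triangle above, with `precip_raster` as hypothetical input data:

```python
import numpy as np

# Hypothetical 2x2 raster of annual precipitation in mm
precip_raster = np.array([[600.0, 750.0],
                          [875.0, 1200.0]])

# Degree of 'medium' membership for every pixel at once, using the
# trimf breakpoints from the cell above: [635, 750, 1000]
medium_membership = np.interp(precip_raster,
                              [635.0, 750.0, 1000.0],
                              [0.0, 1.0, 0.0])
```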
In [ ]:
# Reproject the projected precipitation onto the soil pH grid so the
# suitability layers can be combined cell by cell
test = annual_warmwet_pr_clipped.rio.reproject_match(oglala.ph_da)
test.rio.crs
Out[ ]:
CRS.from_epsg(32613)
In [ ]:
# Calculate ph suitability
oglala_ph_range = (oglala.ph_da > 4.8) * (oglala.ph_da < 8)
oglala_ph_range.hvplot(cmap='viridis',
                       title= "Habitat Suitability based on "
                       "pH at Oglala National Grassland")
Out[ ]:
In [ ]:
# Calculate precipitation suitability in mm
oglala_precip_range = (
    (test.mean('year') > 450) 
    # * (annual_warmwet_pr_clipped.mean('year') <1143)
)
oglala_precip_range.hvplot(cmap='Greens',
                           colorbar=True,
                           title= "Habitat Suitability based on annual "
                           "precipitation \nat Oglala National Grassland "
                           "(RCP 4.5)"
                          )  
Out[ ]:
In [ ]:
oglala_final = (oglala_ph_range) * (oglala_precip_range)
oglala_map = oglala_final.hvplot(
    cmap='coolwarm',
    title= "Assessed Habitat Suitability at "
            "Oglala National Grassland in 2050")
oglala_map
Out[ ]:

STEP 6: RESULTS AND DISCUSSION¶

Headline:¶

  • Oglala National Grassland and Fort Pierre National Grassland both show relatively high suitability based on pH and projected precipitation in 2050.

Discussion:¶

  • The high suitability is mostly driven by the pH assessment, since both regions were suitable under both representative concentration pathways based on the precipitation projections for 2050. The current analysis considered four climate models and two RCP scenarios for annual precipitation and annual maximum temperature averaged over a 30-year period (2040-2069); the assessment could be further improved with a larger ensemble of climate model projections over a longer time horizon. I decided not to include the aspect and elevation variables in the final suitability model, since their optimal ranges are less well established in the peer-reviewed literature. The overall high suitability also appears to be supported by the current literature: both grasslands are located in a northern region where the impacts of climate change will take longer to alter habitat suitability for Sorghastrum nutans.